Frontiers in Bioengineering and Biotechnology
Frontiers Media SA
Preprints posted in the last 7 days, ranked by how well they match Frontiers in Bioengineering and Biotechnology's content profile, based on 88 papers previously published here. The average preprint has a 0.10% match score for this journal, so anything above that is already an above-average fit.
Hosseini-Yazdi, S.-S.; Fitzsimons, K.; Bertram, J. E.
Walking speed is widely used to assess gait recovery following stroke, yet it provides limited insight into how walking performance is mechanically organized. This study examined how center of mass (COM) work organization and propulsion-support coupling vary across walking speeds in individuals with post-stroke hemiparesis, to distinguish recovery of gait organization from recovery of limb-level mechanical function. Eleven individuals with post-stroke hemiparesis performed treadmill walking at speeds ranging from 0.2 to 0.7 m/s while ground reaction forces were recorded. Limb-specific COM power and work were computed using an individual-limbs framework, and interlimb asymmetry in net and positive work, along with the propulsion-support ratio (PSR), was quantified. A qualitative transition in gait organization was observed: at lower walking speeds, COM power exhibited a simplified two-phase pattern, whereas at higher walking speeds (approximately ≥0.5 m/s), a structured four-phase COM power pattern emerged, including identifiable push-off and preload phases. Despite this recovery of gait organization, interlimb work asymmetry remained elevated and paretic PSR remained reduced across all speeds, indicating persistent limb-level mechanical deficits. These findings demonstrate that increases in walking speed and the emergence of a typical COM power structure reflect recovery of gait organization rather than restoration of underlying limb-level mechanical capacity. Consequently, walking speed alone is insufficient to characterize gait recovery after stroke, and biomechanically informed measures of COM work organization and propulsion-support coupling provide complementary insight by distinguishing organizational recovery from limb-level mechanical recovery.
Mizutani, N.; Nishizawa, S.; Enomoto, Y.; OKAMOTO, H.; Baba, R.; Misawa, A.; Takahashi, K.; Tada, Y.; LIN, Y.-C.; Shih, W.-P.
While the need for continuous blood pressure (BP) monitoring in Japan is high, there are no commercially available cuffless devices for personal daily monitoring use. Fingertip-based sensors are a promising alternative as they eliminate the discomfort of repeated cuff inflation. However, their reliability during winter has been a major technical limitation due to cold-induced peripheral vasoconstriction. This study aimed to address this issue by validating a novel fingertip-based continuous BP monitor used by exercising adults during summer and winter. Eleven community-dwelling older adults (mean age, 73.1 ± 8.8 years) were included in this seasonal comparative study. During exercise, we compared a personal fingertip-based continuous monitor (ArteVu) with a standard oscillometric cuff device (Omron) in summer (mean, 26.5°C) and winter (mean, 7.4°C). The study also evaluated the device's accuracy during exercise-induced BP fluctuations and seasonal environmental changes. Participants' awareness of BP management was also assessed using questionnaires. There were strong correlations for systolic BP (SBP) between the two devices in both seasons (r = 0.93 in summer; r = 0.88 in winter). Although the mean difference for SBP was higher in winter than in summer (3.1 ± 11.2 mmHg vs. 0.2 ± 9.4 mmHg), the values remained within a clinically acceptable range for personal monitoring. Notably, 72.7% of participants reported that the ease of using the fingertip-based device significantly increased their awareness of and motivation for daily BP management. This study confirms the feasibility of cuffless fingertip-based continuous BP monitoring across different seasons, including in winter. By overcoming the seasonal limitations, this device fills a critical gap in the Japanese health-monitoring market.
Our findings support the development of smaller and more portable models, representing a shift from traditional "snapshot" cuff measurements to continuous and integrated lifestyle monitoring for older adults.
Hoque, A.; Rahman, M.; Basak, S. K.; Mamun, A. A.
Background: In the absence of structured donor registries, social media platforms have become a dominant mechanism for blood donor recruitment in many low-resource settings. However, the implications of this shift for transfusion timeliness and system reliability remain unclear. Objective: To evaluate the impact of social media-sourced donors on transfusion delay, donor reliability, and hemovigilance-related outcomes compared with conventional donor pathways. Methods: This prospective analytical study included 400 transfusion episodes across tertiary hospitals in Bangladesh. Donor sources were categorized as social media (SM) or conventional (CON). The primary outcome was delay-to-transfusion. Secondary outcomes included donor-related irregularities, documentation completeness, near-miss events, and acute transfusion reactions. Multivariable logistic regression identified predictors of delay ≥4 hours. Results: Social media-sourced donors were associated with significantly longer transfusion delays (5.98 vs 2.97 hours; p<0.001). Delay ≥4 hours occurred in 83.6% of SM cases versus 17.6% of CON cases (OR 23.78). Donor-related irregularities were observed in 85% of SM episodes and absent in CON donors. Safety outcomes did not differ significantly between groups. Social media donor sourcing remained the strongest independent predictor of delay (adjusted OR 18.09). Conclusion: Unregulated social media-based donor recruitment introduces substantial delays and undermines system reliability without improving access. Integration of digital tools into regulated donor systems is essential to strengthen transfusion timeliness and hemovigilance in resource-limited settings.
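For orientation only - the paper's OR of 23.78 and adjusted OR of 18.09 come from the raw counts and a multivariable model - the crude odds ratio can be approximated directly from the quoted percentages; the small discrepancy from the reported value reflects rounding of those percentages:

```python
# Crude odds ratio for delay >=4 h, reconstructed from the quoted
# proportions (83.6% of SM vs 17.6% of CON episodes). Illustrative
# only -- the paper reports OR 23.78 from the raw counts.

def odds(p: float) -> float:
    """Convert a proportion into odds p/(1-p)."""
    return p / (1.0 - p)

p_sm, p_con = 0.836, 0.176  # delay rates quoted in the abstract
crude_or = odds(p_sm) / odds(p_con)
print(f"crude OR = {crude_or:.1f}")  # close to the reported 23.78
```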
Chafetz, R.; Warshauer, S.; Waldron, S.; Kruger, K. M.; Donahue, S.; Bauer, J. P.; Sienko, S.; Bagley, A.; Courter, R.
Markerless motion capture has emerged as a potential substitute for traditional marker-based systems, offering scalable, non-invasive acquisition of human movement. Despite increasing adoption in research and sports applications, its clinical utility for children with complex gait patterns remains an open question. To address this gap, simultaneous marker-based and markerless data were collected in 202 children (12.1 ± 3.9 years). Marker-based kinematics were processed using the Shriners Children's Gait Model (SCGM), while markerless outputs were computed using Theia3D with identical Cardan sequences. Agreement between systems was evaluated using statistical parametric mapping (SPM), root-mean-square error (RMSE), and a gait pattern classification based on the plantarflexor-knee extension index. Markerless output systematically underestimated pelvic tilt, hip rotation, and knee rotation and demonstrated reduced between-subject variance in the transverse plane. SPM revealed widespread waveform differences, although most were of negligible effect, especially in the sagittal plane. Mean sagittal-plane RMSEs were < 5° for the knee and ankle and < 8° for the pelvis and hip. Coronal-plane deviations were < 7°, whereas transverse-plane errors exceeded 10°. RMSE increased significantly with body mass index and use of a walker (p < 0.001). Agreement in sagittal-plane gait classification was moderate between systems (κ = 0.60; 67% overall concordance). These results indicate that markerless motion capture is suitable for analyses emphasizing sagittal deviations but remains limited for applications requiring precise axial or frontal-plane estimation. Future work should address algorithmic underestimation of transverse motion and evaluate markerless performance across increasing severity of gait deviation.
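The two agreement measures quoted above - RMSE between kinematic waveforms and Cohen's κ for classification concordance - can be sketched in a few lines of plain Python. This illustrates the metrics themselves with made-up data; it is not the study's pipeline, and the gait labels below are hypothetical:

```python
import math
from collections import Counter

def rmse(a, b):
    """Root-mean-square error between two equal-length waveforms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def cohens_kappa(y1, y2):
    """Chance-corrected agreement between two raters' categorical labels."""
    n = len(y1)
    p_obs = sum(a == b for a, b in zip(y1, y2)) / n       # observed agreement
    c1, c2 = Counter(y1), Counter(y2)
    p_exp = sum(c1[k] * c2[k] for k in set(y1) | set(y2)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example data, not from the study:
marker     = [10.0, 12.5, 15.0, 13.0, 11.0]  # e.g. joint angle samples, degrees
markerless = [11.0, 12.0, 16.5, 12.0, 10.5]
labels_a = ["crouch", "typical", "crouch", "typical", "typical"]
labels_b = ["crouch", "typical", "typical", "typical", "typical"]

print(round(rmse(marker, markerless), 2))
print(round(cohens_kappa(labels_a, labels_b), 2))
```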
Sarwin, G.; Ricciuti, V.; Staartjes, V. E.; Carretta, A.; Daher, N.; Li, Z.; Regli, L.; Mazzatenta, D.; Zoli, M.; Seungjun, R.; Konukoglu, E.; Serra, C.
Background and Objectives: We report the first intraoperative deployment of a real-time machine vision system in neurosurgery, derived from our previous anatomical detection work, automatically identifying structures during endoscopic endonasal surgery. Existing systems demonstrate promising performance in offline anatomical recognition, yet so far none have been implemented during live operations. Methods: A real-time anatomy detection model was trained using the YOLOv8 architecture (Ultralytics). Following training completion in the PyTorch environment, the model was exported to ONNX format and further optimized using the NVIDIA TensorRT engine. Deployment was carried out using the NVIDIA Holoscan SDK; the system ran on an NVIDIA Clara AGX developer kit. We used the model for real-time recognition of intraoperative anatomical structures and compared its output with the same video labelled manually as a reference. Model performance was reported using the average precision at an intersection-over-union threshold of 0.5 (AP50). Furthermore, end-to-end delay from frame acquisition to the display of the annotated output was measured. Results: A mean AP50 of 0.56 was achieved. The model demonstrated reliable detection of the most relevant landmarks in the transsphenoidal corridor. The mean end-to-end latency of the model was 47.81 ms (median 46.57 ms). Conclusion: For the first time, we demonstrate that clinical-grade, real-time machine-vision assistance during neurosurgery is feasible and can provide continuous, automated anatomical guidance from the surgical field. This approach may enhance intraoperative orientation, reduce cognitive load, and offer a powerful tool for surgical training. These findings represent an initial step toward integrating real-time AI support into routine neurosurgical workflows.
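The AP50 criterion mentioned above counts a detection as a true positive when its bounding box overlaps the reference annotation with intersection-over-union of at least 0.5. A minimal sketch of that matching test (the boxes are illustrative, not data from the study):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if the boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted vs. manually labelled box for one landmark:
pred, truth = (40, 40, 120, 100), (50, 50, 130, 110)
matched = iou(pred, truth) >= 0.5  # counts as a true positive for AP50
print(matched)  # True
```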
Chowdhury, A.; Irtiza, A.
Background: Urgent care departments in Europe face a structural paradox: accelerating digitalisation is accompanied by a patient population that is disproportionately unable to engage with standard digital tools. An internal analysis at the Emergency Department (Akutafdelingen) of Nordsjaellands Hospital in Hilleroed, Denmark found that 43% of emergency patients struggle with digital solutions - a figure that reflects the predictable composition of acute care populations rather than any individual failing. Objective: This paper presents the design, iterative development, and secondary validation of the ED Adaptive Interface (v5): a prototype adaptive patient terminal developed in response to this challenge. The system operationalises what the author terms impairment-first design - a methodology that treats the most constrained patient experience as the primary design problem and derives the standard experience as a subset. The interface configures itself in under ten seconds via nurse-led setup, adapting across four axes of impairment: visual, motor, speech, and cognitive. System: Version 4 supports five accessibility modes, a heatmap pain assessment grid, a Privacy and Dignity panel, a live workflow tracker with care notifications, structured dual-category help requests, and plain-language medical term definitions across four languages. Version 5, reported here for the first time, introduces a Condition Worsening Escalation button, a Referral Pathway Display, a "Why Am I Waiting?" triage explainer, a Symptom Progression Log, MinSP/Yellow Card Scan simulation, expanded language support (seven languages: English, Danish, Arabic with full RTL layout, Turkish, Romanian, Polish, and Somali), and an expanded ten-item Communication Board. The entire system runs as a single 79-kilobyte HTML file with zero infrastructure requirements.
Methods: To base the design on patient-generated evidence, two independent social media threads were subjected to an inductive thematic analysis (Braun and Clarke, 2006): a primary corpus of 83 entries in the Facebook group Foreigners in Denmark (collected March 2026) and a corroborating corpus in an international community group in the Aarhus region (collected April 2026). All identifiers in both datasets were fully anonymised under GDPR Article 89 research provisions prior to analysis. No participants were contacted. Generative AI tools were used to assist with drafting, writing, and prototype code development; all scientific content, data collection, analysis, and conclusions are the sole responsibility of the authors. Results: The first discourse corpus produced five major themes corresponding to the five problem areas the prototype was designed to address: system navigation and triage literacy gaps (31 entries); language and cultural barriers (6 entries); communication failures during care (5 entries); staff overload and capacity constraints (8 entries); and pain and severity assessment failures (14 entries). The corroborating dataset supported all five themes and introduced two additional themes: differential treatment of international patients and medical gaslighting as a long-term pattern of patient advocacy failure. One structural finding - the five most-liked comments incorrectly criticised the original poster for self-referring when she had received explicit 1813 telephone triage approval - directly inspired the Referral Pathway Display and "Why Am I Waiting?" features in v5. Conclusions: The convergence of design rationale and independent social evidence across all five problem categories suggests that impairment-first design is not a niche accessibility concern but a structural approach to healthcare interface quality. The prototype is ready for a structured clinical pilot using the System Usability Scale (SUS) and semi-structured staff interviews. 
The long-term roadmap includes full MinSP integration, hospital PMS connectivity, and clinical validation.
Vikström, A.; Zarrinkoob, L.; Johannesdottir, M.; Wahlin, A.; Hellström, J.; Appelblad, M.; Holmlund, P.
Modelling of hemodynamics in the circle of Willis (CoW) depends on vascular segmentation, which may vary based on imaging modality. Computed tomography angiography (CTA) is commonly used in the clinic but involves radiation and injection of contrast agents, whereas magnetic resonance angiography (MRA) offers a non-invasive alternative. This study aims to compare CoW morphology and modelled cerebral perfusion pressure between CTA and MRA segmentations, to validate whether MRA can replace CTA in modelling workflows. CTA and time-of-flight MRA (TOF-MRA) of the CoW were performed in 19 patients undergoing elective aortic arch surgery (67 ± 7 years, 8 women). The CoW was semi-automatically segmented based on signal intensity thresholding. A TOF-MRA threshold was optimized against the CTA segmentation, using the CTA as the reference standard. Computational fluid dynamics (CFD) modelling with boundary conditions based on subject-specific flow rates from 4D flow MRI simulated cerebral perfusion pressure in the segmented geometries. A baseline simulation and a unilateral brain inflow simulation, i.e., occlusion of a carotid artery, were carried out. Linear mixed models indicated no effect of modality on either average arterial lumen area (CTA - TOF-MRA: -0.2 ± 1.3 mm²; p=0.762) or baseline pressure drops (0.2 ± 1.9 mmHg; p=0.257). In the unilateral inflow simulation, we found no difference in pressure laterality (-6.6 ± 18.4 mmHg; p=0.185) or collateral flow rate (10 ± 46 ml/min; p=0.421). With signal intensity thresholding, TOF-MRA geometries can be matched to produce morphology and modelled cerebral perfusion pressure similar to CTA geometries. The modelled pressure drops over the collateral arteries were sensitive to the segmentation regardless of modality.
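For intuition only - the study's pressure drops come from full CFD with 4D-flow-derived boundary conditions, not from an analytic formula - the order of magnitude of a pressure drop over a single arterial segment can be sketched with the Hagen-Poiseuille relation. All values below are hypothetical:

```python
import math

# Hagen-Poiseuille pressure drop: dP = 8 * mu * L * Q / (pi * r**4).
# Illustrative values only, not the study's geometry or measured flows:
mu = 0.0035          # blood dynamic viscosity, Pa*s
L = 0.01             # segment length, m (10 mm)
Q = 200 / 60 * 1e-6  # flow rate, m^3/s (200 ml/min)
r = 1.5e-3           # lumen radius, m (1.5 mm)

dp_pa = 8 * mu * L * Q / (math.pi * r ** 4)
dp_mmhg = dp_pa / 133.322  # 1 mmHg = 133.322 Pa
print(f"{dp_mmhg:.2f} mmHg")
```

The r⁴ dependence in the denominator is one way to see why modelled pressure drops are sensitive to small differences in the segmented lumen, as the abstract's closing sentence notes.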
Gupta, V.; Podder, D.; Saha, S.; Shah, B.; Ghosh, S.; Kumar, J.; Jacoby, A. P.; Nag, A.; Chattopadhyay, D.; Javed, R.; Rath, A.; Chakraborty, S.; Demde, R.; Vinarkar, S.; Parihar, M.; Zameer, L.; Mishra, D.; Chandy, M.; Nair, R.
Waldenstrom macroglobulinemia (WM) is a rare indolent neoplasm characterized by the presence of more than 10% lymphoid cells in the bone marrow that exhibit plasmacytoid or plasma cell differentiation and secrete an IgM monoclonal protein. This retrospective analysis of 89 patients with WM describes their clinical and laboratory characteristics, treatment patterns, and outcomes. The median age of the entire cohort was 66 years, with male predominance (67.4%). The most common presentations were symptoms pertaining to anemia (77.5%) and constitutional symptoms (33.7%). Median bone marrow lymphoplasmacytic cell involvement was 41%. Positivity for MYD88 and CXCR4 mutations was seen in 81.8% and 2.4% of cases, respectively. BR was the most common regimen used (52.8%). The overall response rate was 87.8%. Median overall survival, progression-free survival, and time to next treatment were 8.49, 2.15, and 3.88 years, respectively. The BR regimen was associated with the highest event-free survival.
Valestrino, K. J.; Ihediwa, C. V.; Dorius, G. T.; Conger, A. M.; Glinka-Przybysz, A.; McCormick, Z. L.; Fogarty, A. E.; Mahan, M. A.; Hernandez-Bello, J.; Konrad, P. E.; Burnham, T. R.; Dalrymple, A. N.
Objectives: Epidural spinal cord stimulation (SCS) is an emerging therapy for motor rehabilitation following spinal cord injury (SCI) and other motor disorders. Conventionally, SCS leads are placed along the dorsal spinal cord (SCSD), where stimulation activates large-diameter afferent fibers, which indirectly activate motoneurons through reflex pathways. This leads to broad activation of flexor and extensor muscles and limited fine-tuned control of motor output. Targeting the ventral spinal cord (SCSV) may enable more direct activation of motoneuron pools, potentially improving the specificity of muscle activation; however, there is currently no established method to place leads ventrally. To address this, we evaluated the feasibility of four modified percutaneous implantation techniques to target the ventrolateral thoracolumbar spinal cord. Materials and methods: Percutaneous SCSV implantation was performed in three human cadaver torso specimens under fluoroscopic guidance. The following approaches were evaluated: sacral hiatus, transforaminal, interlaminar contralateral, and interlaminar ipsilateral. The leads in the latter three approaches were inserted between L1 and L5. Eighteen implants were attempted, with nine leads retained for analysis. Lead and electrode position were assessed using computed tomography (CT) with three-dimensional reconstruction, along with anatomical dissection to verify lead and electrode placement within the epidural space. Results: Successful ventral epidural lead placement was achieved using all four implantation approaches. The sacral hiatus (16/16 electrodes) and transforaminal (8/8 electrodes) approaches resulted in exclusively ventrolateral placement. The interlaminar contralateral approach led to 27/32 electrodes positioned ventrolaterally and 5/32 dorsally. The interlaminar ipsilateral implantation approach led to 14/32 electrodes positioned ventrolaterally and 18/32 positioned ventromedially.
Conclusions: These findings demonstrate that ventral epidural SCS lead placement can be achieved using modified percutaneous implant techniques. The four approaches outlined here provide a clinically feasible pathway to SCSV and establish a foundation for future clinical studies investigating SCSV for motor rehabilitation following SCI.
Farre, R.; Salama, R.; Rodriguez-Lazaro, M. A.; Kiarostami, K.; Fernandez-Barat, L.; Oliveira, V. D. C.; Torres, A.; Farre, N.; Dinh-Xuan, A. T.; Gozal, D.; Otero, J.
Background: The COVID-19 pandemic exposed critical shortages of mechanical ventilators, particularly in low-resource settings. Disruptions in global supply chains and dependence on specialized components highlighted the need for scalable, locally manufactured alternatives for emergency respiratory support. Aim: To describe and evaluate a simplified, supply-chain-independent mechanical ventilator assembled from widely available automotive and simple hardware components, intended as a last-resort solution. Methods: The ventilator is based on a reciprocating air pump driven by an automotive windshield wiper motor coupled to parallel shaft bellows and readily assembled passive membrane valves, requiring only materials available from standard hardware retailers, minimal tools, and basic manual skills. Ventilator performance was assessed through bench testing using patient models simulating severe lung disease in an adult (R=20 cmH2O·s/L, C=15 mL/cmH2O) and a pediatric (R=50 cmH2O·s/L, C=10 mL/cmH2O) patient. A realistic proof of concept was performed in four mechanically ventilated 50-kg pigs. Results: The device delivered tidal volumes up to 600 mL and respiratory rates up to 45 breaths/min with PEEP up to 10 cmH2O, covering pediatric and adult ventilation ranges. In vivo testing showed that the ventilator maintained arterial blood gases within the targeted range. Technical details for ventilator construction are provided in an open-source video tutorial. Discussion: This low-cost ventilator demonstrated adequate performance under demanding conditions. Although not a substitute for commercial intensive care ventilators, its simplicity, autonomy, and independence from fragile supply chains provide a potentially life-saving option in resource-constrained emergency scenarios.
Zeng, A.; O'Hagan, E. T.; Trivedi, R.; Ford, B.; Perry, T.; Turnbull, S.; Sheahen, B.; Mulley, J.; Sedhom, M.; Choy, C.; Biasi, A.; Walters, S.; Miranda, J. J.; Chow, C. K.; Laranjo, L.
Background: Continuous adhesive patch electrocardiographic (ECG) wearables are increasingly prescribed. Patient experience with these devices can influence adherence, but research in this area is limited. This study aimed to explore the perceptions and experiences of patients receiving wearable cardiac monitoring technology as part of their routine care through the lens of treatment burden. Methods: This was a qualitative study with semi-structured phone interviews conducted between February and May 2024. We recruited participants from primary care and outpatient clinics using maximum variation sampling to ensure diversity in sex, ethnicity, and education levels. Interviews were audio-recorded, transcribed, and analysed using reflexive thematic analysis. Results: Sixteen participants (mean age 51 years, 63% female) were interviewed (average duration: 33 minutes). Three themes were developed: 1) "Experience using the device: Burden vs Ease of Use", which captured participants' perceptions of how easily they could integrate the device into their daily lives; 2) "Individual variability in responses to ECG self-monitoring", which covered participants' emotional and cognitive responses to knowing their heart rhythm was monitored; and 3) "The care process shapes patient experiences", which reflected support preferences during the set-up and monitoring period and the uncertainty regarding timely clinical and device feedback. Conclusions: Patients valued cardiac wearables for facilitating diagnosis and felt reassured knowing they were clinically monitored. However, gaps in the information provided to patients seemed to cause anxiety for some participants. These concerns could be mitigated through clearer clinician communication and patient education at the time of prescription.
Chihara, A.; Mizuno, R.; Kagawa, N.; Takayama, A.; Okumura, A.; Suzuki, M.; Shibata, Y.; Mochii, M.; Ohuchi, H.; Sato, K.; Suzuki, K.-i. T.
Fluorescence in situ hybridization (FISH) enables highly sensitive, high-resolution detection of gene transcripts. Moreover, by employing multiple probes, this technique allows for multiplexed, simultaneous detection of distinct gene expression patterns spatiotemporally, making it a valuable spatial transcriptomics approach. Owing to these advantages, FISH techniques are rapidly being adopted across diverse areas of basic biology. However, conventional protocols often rely on volatile, toxic reagents such as formalin or methanol, posing potential health risks to researchers. Here, we present a safer protocol that replaces these chemicals with low-toxicity alternatives, without compromising the high detection sensitivity of FISH. We validated this protocol using both in situ hybridization chain reaction (HCR) and signal amplification by exchange reaction (SABER)-FISH in frozen sections of various model organisms, including mouse (Mus musculus), amphibians (Xenopus laevis and Pleurodeles waltl), and medaka (Oryzias latipes). Our results demonstrate successful multiplexed detection of morphogenetic and cell-type marker genes in these model animals using this safer protocol. The protocol has the additional advantage of requiring no proteolytic enzyme treatment, thus preserving tissue integrity. Furthermore, we show that this protocol is fully compatible with EGFP immunostaining, allowing for the simultaneous detection of mRNAs and reporter proteins in transgenic animals. This protocol retains the benefits of highly sensitive, multiplexed, and multimodal detection afforded by integrating in situ HCR and SABER-FISH with immunohistochemistry, while providing a safer option for researchers, thereby offering a valuable tool for basic biology.
Walton, A. E.; Versalovic, E.; Merner, A. R.; Lazaro-Munoz, G.; Bush, A.; Richardson, M.
Patients who participate in intracranial neuroscience research make invaluable contributions to our understanding of the brain, accelerating the development of neurotechnological interventions. Engagement of patients as part of this research presents unique challenges, as study goals can be distant from immediate clinical applications and require specialized domain knowledge. Yet methods for meaningfully integrating patient communities into these research efforts are essential, as intracranial neuroscience guides the application of artificial intelligence for understanding and enhancing human cognition. To identify what patients consider meaningful research engagement, we interviewed individuals who participated in a study during their Deep Brain Stimulation (DBS) surgery and attended a group event where they interacted with our research team. Analysis of semi-structured interviews identified four main themes: interest in science and the future of clinical care, contributing to science to improve lives, connecting with others, and accessibility considerations. Based on these insights, we propose strategies for transformational participation of patient communities in intracranial neuroscience research with respect to engagement objectives, communication, and scope. This approach offers a foundation for sustaining relationships between scientists and communities rooted in trust and transparency, to ensure that the impacts of neurotechnology on human health and cognition are aligned with patient needs as well as desired public values.
Varisco, G.; Plantin, J.; Almeida, R.; Palmcrantz, S.; Astrand, E.
Stroke is the third leading cause of death and disability combined worldwide and often results in hemiparesis. Functional magnetic resonance imaging (fMRI) is a non-invasive technique used to investigate changes in brain activations during tasks aimed at restoring lost motor function. Participants with chronic stroke and residual hemiparesis in the upper extremity were recruited for a clinical intervention that included neurofeedback training and fMRI sessions with motor-execution and motor-imagery tasks. The present study provides a baseline characterization of brain activations prior to neurofeedback training. Since lesion site and volume varied across participants, two fMRI preprocessing pipelines were applied. The first one was used for twelve participants with lesions restricted to a single hemisphere and for one participant with small secondary lesions in the contralesional hemisphere, whereas the second one was used for two participants with large bilateral lesions. These were followed by quality control measures and statistical analysis. First-level (i.e., single-participant) analysis returned the strongest and most extensive activation across participants during motor-execution tasks, with clusters identified in the ipsilesional parietal lobe, bilateral occipital lobes, and cerebellum after Family-Wise Error correction. Second-level (i.e., group-level) analysis involving participants who underwent the first fMRI preprocessing pipeline revealed a significant cluster in the cerebellum after False Discovery Rate correction. These results are consistent with previous studies involving participants with chronic stroke performing motor tasks. Cerebellar recruitment observed consistently across participants could reflect compensatory mechanisms supporting motor control after stroke.
Huang, C.-H. S.; Kuehne, L. M.; Jacuzzi, G.; Olden, J. D.; Seto, E.
Military aviation training noise remains understudied despite its widespread impacts across urban, rural, and wilderness areas. The predominance of low-frequency noise and repetitive training can create pervasive noise pollution, yet past research often fails to capture the full range of health and quality-of-life effects. This study analyzed two complaint datasets related to Whidbey Island Naval Air Station noise: U.S. Navy records (2017-2020) and Quiet Skies Over San Juan County data (2021-2023). We analyzed and mapped sentiment intensity from noise complaints relative to modeled annual noise exposure, developed a typology to classify impacts, and modeled the environmental and operational factors influencing complaints. Findings revealed widespread negative sentiment and anger, often beyond the bounds of estimated noise contours, suggesting that annual cumulative noise models inadequately estimate community impacts. Complaints consistently highlighted sleep disturbance, hearing and health concerns, and compromised home environments due to shaking, vibration, and disruption of daily life. Residents also reported significant social, recreational, and work disruptions, along with feelings of fear, helplessness, and concern for children's well-being. The number of complaints was strongly associated with training schedules, with late-night sessions being the strongest predictor. A delayed response pattern suggests residents reach a frustration threshold before filing complaints. Overall, our findings demonstrate persistent negative sentiment and diverse impacts from military aviation noise. Results highlight the need for improved noise metrics, modeling, and operational adjustments to mitigate the most disruptive effects.
Emerick, M.; Grahn, J. A.
Walking impairments in Parkinson's disease (PD), including reduced speed, cadence, and stride length, and increased variability, impair mobility and raise fall risk. Conventional treatments may fail to address these deficits, underscoring the need for complementary non-invasive alternatives. This study examined whether combining rhythmic auditory cueing with transcranial direct current stimulation (tDCS) over the supplementary motor area (SMA), a critical region for internally-generated movement, would enhance gait performance in PD. Thirty-three participants with PD and thirty-two healthy controls completed two sessions (anodal vs. sham tDCS) with gait assessed during stimulation, immediately after stimulation, and 15 minutes after stimulation under two auditory conditions: walking in silence and walking to music paced 10% faster than baseline cadence. Spatiotemporal, variability, and stability gait parameters were analyzed using linear mixed-effects models. Rhythmic auditory cueing significantly increased cadence and speed during, immediately after, and especially 15 minutes after stimulation, suggesting sustained effects of rhythmic entrainment. Anodal tDCS produced faster cadence, as well as lower stride time variability and stride width, particularly in individuals with PD. Although both music and anodal tDCS affected gait, no interaction was observed, indicating independent effects. Individuals with PD had greater gait variability overall, and adjusted temporal gait parameters less to music than healthy controls did. Anodal stimulation reduced walking variability in PD, reducing the group differences observed under sham conditions. These findings suggest that rhythmic cueing and SMA stimulation target complementary mechanisms, highlighting the promise of combined tDCS-music interventions for gait rehabilitation in PD.
Thomas, C.; Kim, J. Y.; Hasan, A.; Kpodzro, S.; Cortes, J.; Day, B.; Jensen, S.; L'Huillier, S.; Oden, M. O.; Zumbado Segura, S.; Maurer, E. W.; Tucker, S.; Robinson, S.; Garcia, B.; Muramalla, E.; Lu, S.; Chawla, N.; Patel, M.; Balu, S.; Sendak, M.
Safety net healthcare delivery organizations (SNOs) serve vulnerable populations but face persistent challenges in adopting new technologies, including AI. While systematic barriers to technology adoption in SNOs are well documented, little is known about how AI is implemented in these settings. This study explored real-world AI adoption in SNOs, focusing on identifying barriers encountered across the AI lifecycle and strategies used to overcome them. Five SNOs in the U.S. participated in a 12-month technical assistance program, the Practice Network, to implement AI tools of their choosing. Observed barriers and mitigation strategies were documented throughout program activities and, at the conclusion of the program, reviewed and refined with participants using a participatory research approach to ensure findings reflected lived experiences and organizational contexts. Key barriers emerged during the Integration and Lifecycle Management phases and included gaps in AI performance evaluation and impact assessments, communication with patients about AI use, foundational AI education, financial resources for purchasing and maintaining AI tools, and AI governance structures. Effective strategies for addressing these barriers were primarily supported through centralized expertise, structured guidance, and peer learning. These findings provide granular, actionable insights for SNO leaders, offering guidance for anticipating barriers and proactively planning mitigation strategies. By including SNO perspectives, the study also contributes to the broader health AI ecosystem and underscores the importance of participatory, collaborative approaches to support safe, effective, and ethical AI adoption in resource-constrained settings.
Author Summary: Safety net organizations (SNOs) are healthcare systems that primarily serve low-income and underinsured patients.
While interest in artificial intelligence (AI) in healthcare has grown rapidly, little is known about how these organizations experience AI adoption in practice. In this study, we partnered with five SNOs over a 12-month program to document the challenges they encountered when implementing AI tools and the strategies they used to address them. We worked closely with SNO staff throughout the process to ensure our findings reflected their lived experiences with AI implementation. We found that the most common challenges arose when organizations tried to integrate AI into daily operations and monitor and maintain those tools over time. Specific barriers included difficulty evaluating whether AI was performing as expected, limited guidance on communicating with patients about AI use, a lack of resources for staff training, limited financial resources, and the absence of formal governance structures. Successful strategies for overcoming these challenges drew on shared knowledge and structured support provided by the program, as well as learning from peer organizations. These findings offer practical guidance for SNO leaders planning or managing AI adoption, and contribute to a broader conversation about what is required to implement AI safely and effectively in healthcare settings that serve the most medically and socially vulnerable patients.
Welch, A. M.; Beseler, C. L.; Cross, S. T.
Purpose: Alpha-gal syndrome (AGS) is an emerging health issue. This syndrome, caused by tick bites, induces allergic reactions to the sugar molecule galactose-alpha-1,3-galactose after exposure to non-primate mammalian meat and other byproducts. Agricultural workers spend significant time outdoors, placing them at increased risk for tick bites and tick-borne conditions like AGS. This study aimed to characterize farmers' and ranchers' prior knowledge, symptomology, and diagnostic experiences with AGS. Methods: We conducted a cross-sectional survey of more than 200 farmers and ranchers with a self-reported AGS diagnosis. The survey captured farmers' and ranchers' experiences related to prior knowledge of and experience with tick bites and AGS, reported symptoms, and obtaining a diagnosis. Findings: A total of 201 respondents across 26 states participated in the survey, with the majority from Missouri and Oklahoma. We identified four distinct symptom clusters, with the most reported symptoms being abdominal cramping, diarrhea, itchy skin, and nausea. Women more often reported gastrointestinal discomfort, and men were more likely to be in the mild symptom category. On average, participants reported 2.98 medical provider visits before receiving a diagnosis, with most being diagnosed by general practitioners and allergists. Conclusions: No previous studies have focused on the symptom and diagnostic experiences of farmers and ranchers with AGS. Capturing such data is essential as these workers may experience unique occupational challenges following AGS diagnosis. The diagnostic experience data support a continuing need to educate and empower AGS patients and providers, especially agricultural workers and providers serving rural communities.
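Symptom clusters of the kind reported here can be derived by grouping respondents encoded as binary symptom vectors. The toy sketch below uses entirely hypothetical data and a simple k-means variant with Hamming distance and majority-vote centroids; the paper's actual clustering method is not specified in the abstract.

```python
# Hedged sketch: clustering hypothetical binary symptom profiles.
# Columns: [abdominal cramping, diarrhea, itchy skin, nausea]; 1 = reported.

def hamming(a, b):
    """Number of positions at which two equal-length vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def kmeans_binary(points, k, iters=10):
    """Cluster binary vectors; deterministic (first-k initialization)."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: hamming(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                # Majority vote per symptom column (ties round up to 1).
                centroids[c] = [int(sum(col) * 2 >= len(members))
                                for col in zip(*members)]
    return labels, centroids

respondents = [
    (1, 1, 0, 1), (1, 1, 0, 0), (1, 1, 1, 1),   # GI-dominant profiles
    (0, 0, 1, 0), (0, 0, 1, 0), (0, 1, 1, 0),   # skin-dominant profiles
]
labels, centroids = kmeans_binary(respondents, k=2)
print(labels)
```

With real survey data, the number of clusters would be chosen by a validity criterion rather than fixed in advance, and a method suited to categorical data (e.g., latent class analysis) might be preferred.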
Pandit, A. S.; Chaudri, T.; Chaudri, Z.; Vasilica, A. M.; Dhaliwal, J.; Sayar, Z.; Cohen, H.; Westwood, J. P.; Toma, A. K.
Background Venous thromboembolism (VTE) remains a major cause of perioperative morbidity in cranial neurosurgery, yet clinical practice varies widely, and formal guidelines are inconsistent. Understanding internationally sampled neurosurgical practice is essential for informing consensus and future trials. Methods An international, 2-stage, cross-sectional, internet-based survey was conducted. Practising neurosurgeons performing elective adult cranial surgery were eligible. Descriptive statistics were used to summarise practice. Responses covered patterns of pre-operative haemostasis decision making, use and timing of mechanical and/or chemical prophylaxis, use of perioperative imaging prior to anticoagulation, and frequency of clinical assessment for VTE. Associations with geographical income status, subspecialty, and years post-certification were statistically tested. Practice heterogeneity was quantified and contextual influence was summarised using mean effect sizes across stratifying variables in order to determine domains of true equipoise. Results Of 585 responses, 456 (78%) met criteria for inclusion: representing 322 units across 78 countries (71% high-income). Thirteen per cent reported no departmental VTE plan; 23% followed no guidelines and 12% used multiple. Routine pre-operative testing almost universally included haemoglobin/platelets/haematocrit, with fibrinogen more common in high-income settings. Compared with high-income country respondents, low- and middle-income respondents reported higher haemoglobin transfusion thresholds (>90 g/dL; p<0.001) and shorter antiplatelet interruption (p≤0.03), and less frequent outpatient VTE assessment (p<0.001). Mechanical prophylaxis was common (TEDs 81%, IPC 62%), typically started pre- or intra-operatively. Among those completing the chemoprophylaxis section (n=310), 57% required a CT or MRI scan before LMWH, which was then initiated on average 31.4 hours after surgery.
1% of respondents did not routinely use LMWH. Many clinical decisions demonstrated statistical equipoise, i.e., high heterogeneity with low contextual influence. Conclusion Peri-operative haemostasis and VTE prophylaxis practices in adult elective cranial neurosurgery vary substantially worldwide, with some decisions reflecting geographical or socioeconomic differences and many others reflecting true clinical equipoise rather than contextual determinants. By mapping contemporary real-world practice across diverse health-system contexts, this study provides a necessary empirical foundation for rational trial design and future guideline development.
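One common way to quantify practice heterogeneity across survey items, sketched below, is the normalized Shannon entropy of the response distribution (0 = complete agreement, 1 = maximal disagreement). The study's exact heterogeneity metric is not given in the abstract, and the response counts here are invented.

```python
# Hedged sketch: entropy-based heterogeneity index for one survey item.
import math

def normalized_entropy(counts):
    """Shannon entropy of a categorical response distribution, scaled to [0, 1]."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    h_max = math.log2(len(counts))
    return h / h_max if h_max > 0 else 0.0

# Hypothetical item with responses [yes, no, sometimes]:
near_consensus = [430, 15, 11]    # low heterogeneity: near-uniform practice
true_equipoise = [150, 160, 146]  # high heterogeneity: candidate for a trial

print(round(normalized_entropy(near_consensus), 2),
      round(normalized_entropy(true_equipoise), 2))
```

Items with high entropy but small effect sizes across stratifying variables (income status, subspecialty, experience) would be the "true equipoise" domains the authors describe.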
Ballatore, F.; Madzvamuse, A.; Jebane, C.; Helfer, E.; Allena, R.
Understanding how cells migrate through confined environments is crucial for elucidating fundamental biological processes, including cancer invasion, immune surveillance, and tissue morphogenesis. The nucleus, as the largest and stiffest cellular organelle, often limits cellular deformability, making it a key factor in migration through narrow pores or highly constrained spaces. In this work, we introduce a geometric surface partial differential equation (GS-PDE) model in which the cell plasma membrane and nuclear envelope are described as evolving energetic closed surfaces governed by force-balance equations. We replicate the results of a biophysical experiment in which a microfluidic device is used to impose compressive stresses on cells by driving them through narrow microchannels under a controlled pressure gradient. The model is validated by reproducing cell entry into the microchannels. A parametric sensitivity analysis highlights the dominant influence of specific parameters, whose accurate estimation is essential for faithfully capturing the experimental setup. We found that surface tension and confinement geometry emerge as key determinants of translocation efficiency. Although tailored to this specific setup for validation purposes, the framework is sufficiently general to be applied to a broad range of cell mechanics scenarios, providing a robust and flexible tool for investigating the interplay between cell mechanics and confinement. It also offers a solid foundation for future extensions integrating more complex biochemical processes such as active confined migration.
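The surface-tension term in such a force-balance model can be illustrated in one dimension lower: a closed curve evolving by discrete curvature (Laplacian) flow is the 2D analog of an energetic closed surface relaxing under tension. The sketch below is a toy with made-up parameters, not the paper's GS-PDE scheme, which couples membrane and nuclear-envelope surfaces in a full microchannel geometry.

```python
# Hedged sketch: tension-driven smoothing of a closed polygonal curve via
# explicit Euler steps of discrete curvature flow. Parameters are illustrative.
import math

def perimeter(pts):
    """Total edge length of a closed polygon given as a list of (x, y) points."""
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)])
               for i in range(len(pts)))

def curvature_flow_step(pts, dt=0.05):
    """One explicit Euler step of discrete curvature flow on a closed polygon."""
    n = len(pts)
    new = []
    for i in range(n):
        (xp, yp), (x, y), (xn, yn) = pts[i - 1], pts[i], pts[(i + 1) % n]
        # The discrete Laplacian of position approximates the curvature vector.
        lap = (xp + xn - 2 * x, yp + yn - 2 * y)
        new.append((x + dt * lap[0], y + dt * lap[1]))
    return new

# Start from a "noisy circle"; surface tension should shrink and smooth it.
n = 40
curve = [(math.cos(2 * math.pi * i / n) * (1 + 0.1 * (-1) ** i),
          math.sin(2 * math.pi * i / n) * (1 + 0.1 * (-1) ** i))
         for i in range(n)]

p0 = perimeter(curve)
for _ in range(50):
    curve = curvature_flow_step(curve)
print(perimeter(curve) < p0)
```

In the full model, this relaxation would be balanced against the driving pressure gradient and the contact forces from the microchannel walls, which is what makes translocation efficiency depend on both tension and confinement geometry.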